Hypothesis Disparity Regularized Mutual Information Maximization
Authors
Abstract
We propose a hypothesis disparity regularized mutual information maximization (HDMI) approach to tackle unsupervised hypothesis transfer, as an effort towards unifying hypothesis transfer learning (HTL) and unsupervised domain adaptation (UDA), where the knowledge from a source domain is transferred solely through hypotheses and adapted to the target domain in an unsupervised manner. In contrast to prevalent HTL and UDA approaches that typically use a single hypothesis, HDMI employs multiple hypotheses to leverage the underlying distributions of the source and target hypotheses. To better utilize the crucial relationship among different hypotheses, as opposed to unconstrained optimization of each hypothesis independently, while adapting to the unlabeled target domain via mutual information maximization, HDMI incorporates a hypothesis disparity regularization that coordinates the target hypotheses to jointly learn better target representations while preserving more transferable source knowledge with better-calibrated prediction uncertainty. HDMI achieves state-of-the-art adaptation performance on benchmark datasets for UDA in the context of HTL, without the need to access the source data during adaptation.
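The objective structure the abstract describes, mutual information maximization over unlabeled target predictions plus a disparity term coordinating multiple hypotheses, can be sketched in NumPy. The function names, the entropy-based MI estimate, and the pairwise-KL form of the disparity regularizer below are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with a numerical-stability shift."""
    shifted = np.exp(logits - logits.max(axis=1, keepdims=True))
    return shifted / shifted.sum(axis=1, keepdims=True)

def mutual_information(probs):
    """Empirical I(input; prediction): H(marginal) minus mean conditional entropy.
    High when predictions are individually confident but balanced across classes."""
    marginal = probs.mean(axis=0)
    h_marginal = -np.sum(marginal * np.log(marginal + 1e-12))
    h_conditional = -np.mean(np.sum(probs * np.log(probs + 1e-12), axis=1))
    return h_marginal - h_conditional

def hypothesis_disparity(prob_list):
    """Mean pairwise KL divergence between the hypotheses' predictions
    (an assumed instantiation of the disparity regularizer)."""
    total, pairs = 0.0, 0
    for i, p in enumerate(prob_list):
        for j, q in enumerate(prob_list):
            if i != j:
                total += np.mean(
                    np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=1))
                pairs += 1
    return total / pairs

def hdmi_objective(logit_list, beta=0.1):
    """Score to maximize: mean MI across hypotheses minus a disparity penalty
    that pulls the hypotheses toward consistent predictions."""
    probs = [softmax(z) for z in logit_list]
    avg_mi = np.mean([mutual_information(p) for p in probs])
    return avg_mi - beta * hypothesis_disparity(probs)
```

In this sketch, maximizing the first term alone would adapt each hypothesis independently; the `beta`-weighted disparity term is what couples them, which is the relationship the abstract emphasizes.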
Similar Resources
Maximum mutual information regularized classification
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncert...
Alignment by Maximization of Mutual Information
A new information-theoretic approach is presented for finding the pose of an object in an image. The technique does not require information about the surface properties of the object, besides its shape, and is robust with respect to variations of illumination. In our derivation, few assumptions are made about the nature of the imaging process. As a result the algorithms are quite general and can ...
Discriminative Clustering by Regularized Information Maximization
Is there a principled way to learn a probabilistic discriminative classifier from an unlabeled data set? We present a framework that simultaneously clusters the data and trains a discriminative classifier. We call it Regularized Information Maximization (RIM). RIM optimizes an intuitive information-theoretic objective function which balances class separation, class balance and classifier comple...
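The RIM objective described above, mutual information between inputs and cluster assignments balanced against classifier complexity, can be sketched for a multinomial logistic model. The parameterization and the L2 penalty below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def rim_objective(W, X, lam=0.1):
    """RIM-style score to maximize: empirical mutual information between
    inputs X and soft cluster assignments, minus an L2 complexity penalty
    on the classifier weights W (an assumed, simple regularizer choice)."""
    logits = X @ W
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    p = exp / exp.sum(axis=1, keepdims=True)   # p(cluster | x), row-wise softmax
    marginal = p.mean(axis=0)                  # class-balance term
    h_marginal = -np.sum(marginal * np.log(marginal + 1e-12))
    h_conditional = -np.mean(np.sum(p * np.log(p + 1e-12), axis=1))
    return (h_marginal - h_conditional) - lam * np.sum(W ** 2)
```

The marginal-entropy term rewards class balance, the conditional-entropy term rewards class separation (confident assignments), and the penalty bounds classifier complexity, mirroring the three ingredients the abstract lists.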
Statistical mechanics of mutual information maximization
– An unsupervised learning procedure based on maximizing the mutual information between the outputs of two networks receiving different but statistically dependent inputs is analyzed (Becker S. and Hinton G., Nature, 355 (1992) 161). By exploiting a formal analogy to supervised learning in parity machines, the theory of zero-temperature Gibbs learning for the unsupervised procedure is presented...
Information-Maximization Clustering Based on Squared-Loss Mutual Information
Information-maximization clustering learns a probabilistic classifier in an unsupervised manner so that mutual information between feature vectors and cluster assignments is maximized. A notable advantage of this approach is that it involves only continuous optimization of model parameters, which is substantially simpler than discrete optimization of cluster assignments. However, existing metho...
Journal
Journal Title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2021
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v35i9.17003